Recent advances in transfer learning and pre-training of large contextual encoders have fostered innovation in real-world applications, including dialogue assistants. The practical needs of intent recognition call for effective data usage and the ability to constantly update supported intents, adopting new ones and abandoning outdated ones. In particular, the generalized zero-shot paradigm, in which a model is trained on seen intents and tested on both seen and unseen intents, is taking on new importance. In this paper, we explore the generalized zero-shot setup for intent recognition. Following the best practices of zero-shot text classification, we treat the task with a sentence-pair modeling approach. For unseen intents, using only intent labels and user utterances, without access to external resources such as knowledge bases, we outperform the previous state-of-the-art F1-measure by up to 16%. A further enhancement, lexicalization of intent labels, improves performance by up to 7%. Using task transfer from other sentence-pair tasks, such as natural language inference, we obtain additional improvements.
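As an illustration of the sentence-pair treatment of zero-shot intent recognition, the minimal sketch below scores an utterance against textual intent labels with an off-the-shelf NLI cross-encoder; the model name and label set are placeholders, not the configuration used in the paper.

from transformers import pipeline

# Sentence-pair (NLI-style) zero-shot scoring: each (utterance, intent label)
# pair is judged for entailment, and the best-scoring label wins.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

utterance = "I need to move money to my savings account"
intent_labels = ["transfer money", "check balance", "report lost card"]  # hypothetical intents

result = classifier(utterance, candidate_labels=intent_labels)
print(result["labels"][0], result["scores"][0])  # top-ranked (possibly unseen) intent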
The practical needs of developing task-oriented dialogue assistants require the ability to understand many languages. Novel benchmarks for multilingual natural language understanding (NLU) include monolingual sentences in several languages, annotated with intents and slots. In this setup, models for cross-lingual transfer show remarkable performance on joint intent recognition and slot filling. However, existing benchmarks lack code-switched utterances, which are difficult to collect and label due to the complexity of their grammatical structure. The evaluation of NLU models therefore appears biased and limited, since code-switching is left out of scope. Our work adopts established methods to generate plausible and naturally-sounding code-switched utterances and uses them to create a synthetic code-switched test set. Based on experiments, we report that state-of-the-art NLU models are unable to handle code-switching: at worst, performance measured by semantic accuracy drops from roughly 80% to as low as 15% across languages. Furthermore, we show that pre-training on synthetic code-mixed data helps maintain performance on the proposed test set at a level comparable to that on monolingual data. Finally, we analyze different language pairs and show that the closer the languages are, the better the NLU model handles their alternation. This is in line with the common understanding of how multilingual models conduct transfer between languages.
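The sketch below illustrates one simple recipe for producing synthetic code-switched utterances, replacing selected content words with their counterparts in another language; the utterance, positions, and bilingual lexicon are invented for the example and do not reproduce the paper's generation method.

# Toy code-switching generator: swap chosen tokens with their translations.
utterance = ["set", "an", "alarm", "for", "seven", "am"]
lexicon = {"alarm": "будильник", "seven": "семь"}  # hypothetical EN->RU lexicon

def code_switch(tokens, lexicon, positions):
    """Replace tokens at the given positions with their foreign-language counterparts."""
    switched = list(tokens)
    for i in positions:
        switched[i] = lexicon.get(tokens[i], tokens[i])
    return " ".join(switched)

print(code_switch(utterance, lexicon, positions=[2, 4]))
# -> "set an будильник for семь am"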
Missing values are a common problem in data science and machine learning. Removing instances with missing values can adversely affect the quality of further data analysis. This is exacerbated when there are relatively many more features than instances, so that the proportion of affected instances is high. Such a scenario is common in many important domains; for example, single nucleotide polymorphism (SNP) datasets provide a large number of features over a genome for a relatively small number of individuals. To preserve as much information as possible prior to modeling, a rigorous imputation scheme is acutely needed. While Denoising Autoencoders are a state-of-the-art method for imputation in high-dimensional data, they still require enough complete cases to train on, which are often not available in real-world problems. In this paper, we consider missing value imputation as a multi-label classification problem and propose Chains of Autoreplicative Random Forests. Using multi-label Random Forests instead of neural networks works well for low-sampled data, as there are fewer parameters to optimize. Experiments on several SNP datasets show that our algorithm effectively imputes missing values based only on information from the dataset itself and exhibits better performance than standard algorithms that do not require any additional information. In this paper, the algorithm is implemented specifically for SNP data, but it can easily be adapted for other cases of missing value imputation.
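To make the "imputation as classification" framing concrete, the following sketch fills one categorical column with a random forest trained on the remaining columns; it is a simplified stand-in using invented toy data, not the full chain of autoreplicative forests described in the paper.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def impute_column(X, col):
    """Fill missing values (np.nan) in one genotype column using the other columns."""
    observed = ~np.isnan(X[:, col])
    features = np.delete(np.nan_to_num(X, nan=-1.0), col, axis=1)  # crude encoding of remaining NaNs
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(features[observed], X[observed, col].astype(int))
    X_out = X.copy()
    X_out[~observed, col] = clf.predict(features[~observed])
    return X_out

# Toy SNP-like matrix with genotypes coded 0/1/2 and a few missing entries.
X = np.array([[0, 1, 2, np.nan],
              [1, 1, 2, 0],
              [0, 2, 1, 1],
              [np.nan, 2, 1, 1]], dtype=float)
X = impute_column(X, col=3)
X = impute_column(X, col=0)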
During training, reinforcement learning systems interact with the world without considering the safety of their actions. When deployed in the real world, such systems can be dangerous and cause harm to their surroundings. Often, dangerous situations can be mitigated by defining a set of rules that the system should not violate under any conditions. For example, in robot navigation, one safety rule would be to avoid colliding with surrounding objects and people. In this work, we define safety rules in terms of the relationships between the agent and objects and use them to prevent reinforcement learning systems from performing potentially harmful actions. We propose a new safe epsilon-greedy algorithm that uses safety rules to override agents' actions when they are considered unsafe. In our experiments, we show that a safe epsilon-greedy policy significantly increases the safety of the agent during training, improves learning efficiency by converging much faster, and achieves better performance than the base model.
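A minimal sketch of the action-selection step is given below, assuming a user-supplied is_safe(state, action) predicate that encodes the safety rules (e.g., "do not move toward an obstacle"); the function names and fallback behaviour are illustrative rather than the exact algorithm from the paper.

import random

def safe_epsilon_greedy(q_values, state, actions, is_safe, epsilon=0.1):
    """Epsilon-greedy selection restricted to actions permitted by the safety rules."""
    allowed = [a for a in actions if is_safe(state, a)]
    if not allowed:                  # if every action is ruled out, fall back to the full set
        allowed = actions
    if random.random() < epsilon:
        return random.choice(allowed)                        # explore among safe actions only
    return max(allowed, key=lambda a: q_values[state][a])    # exploit the best safe action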
This paper examines the encoding of analogy in large-scale pretrained language models, such as BERT and GPT-2. Existing analogy datasets typically focus on a limited set of analogical relations, with a high similarity of the two domains between which the analogy holds. As a more realistic setup, we introduce the Scientific and Creative Analogy dataset (SCAN), a novel analogy dataset containing systematic mappings of multiple attributes and relational structures across dissimilar domains. Using this dataset, we test the analogical reasoning capabilities of several widely-used pretrained language models (LMs). We find that state-of-the-art LMs achieve low performance on these complex analogy tasks, highlighting the challenges still posed by analogy understanding.
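One simple way such analogical knowledge can be probed, shown purely for illustration and not necessarily the paper's evaluation protocol, is to ask a masked language model to complete a cross-domain mapping.

from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
prompt = "An atom is like a solar system, and the nucleus is like the [MASK]."
for candidate in fill(prompt, top_k=3):
    print(candidate["token_str"], round(candidate["score"], 3))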
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
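As a usage note, the released checkpoints can be loaded through the Hugging Face hub; the small bloom-560m variant is used in this sketch only so that it runs on modest hardware, and the prompt is arbitrary.

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("BLOOM is a multilingual language model that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))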
It has been experimentally demonstrated that humans are able to learn in a manner that allows them to make predictions on categories for which they have not seen any examples (Malaviya et al., 2022). Sucholutsky and Schonlau (2020) have recently presented a machine learning approach that aims to do the same. They utilise synthetically generated data and demonstrate that it is possible to achieve sub-linear scaling and develop models that can learn to recognise N classes from M training samples where M is less than N - aka less-than-one shot learning. Their method was, however, defined for univariate or simple multivariate data (Sucholutsky et al., 2021). We extend it to work on large, high-dimensional and real-world datasets and empirically validate it in this new and challenging setting. We apply this method to learn previously unseen NLP tasks from very few examples (4, 8 or 16). We first generate compact, sophisticated less-than-one shot representations called soft-label prototypes which are fitted on training data, capturing the distribution of different classes across the input domain space. We then use a modified k-Nearest Neighbours classifier to demonstrate that soft-label prototypes can classify data competitively, even outperforming much more computationally complex few-shot learning methods.
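The toy sketch below shows the soft-label prototype idea: each prototype carries a probability distribution over classes, and a query point is labelled by a distance-weighted mixture of the soft labels of its nearest prototypes. The prototype locations and distributions are invented, and the weighting is one simple choice rather than the exact modified k-Nearest Neighbours classifier from the paper.

import numpy as np

prototypes = np.array([[0.0, 0.0], [1.0, 1.0]])       # prototype locations in feature space
soft_labels = np.array([[0.7, 0.3, 0.0],               # per-prototype distribution over 3 classes
                        [0.0, 0.4, 0.6]])

def classify(x, k=2):
    dists = np.linalg.norm(prototypes - x, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-8)             # closer prototypes count more
    mixture = (weights[:, None] * soft_labels[nearest]).sum(axis=0)
    return mixture.argmax()

print(classify(np.array([0.2, 0.1])))   # -> 0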
We present RuDSI, a new benchmark for word sense induction (WSI) in the Russian language. The dataset was created using manual annotation and semi-automatic clustering of Word Usage Graphs (WUGs). Unlike previous WSI datasets for Russian, RuDSI is fully data-driven (based on texts from the Russian National Corpus), with no external word senses imposed on annotators. Depending on the parameters of graph clustering, different derivative datasets can be produced from the raw annotation. We report the performance that several baseline WSI methods achieve on RuDSI and discuss possibilities for improving these scores.
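A rough sketch of the Word Usage Graph idea follows: usages of a target word become nodes, edge weights come from pairwise similarity judgements (here invented), and clustering the thresholded graph yields the induced senses. The clustering below is the simplest possible choice and not the semi-automatic procedure used to build RuDSI.

import networkx as nx

usages = ["He sat on the bank of the river",
          "She deposited cash at the bank",
          "The bank approved the loan"]
# Hypothetical pairwise similarity scores; in WUGs these come from annotators.
similarity = {(0, 1): 1.0, (0, 2): 1.0, (1, 2): 4.0}

G = nx.Graph()
G.add_nodes_from(range(len(usages)))
for (i, j), score in similarity.items():
    if score >= 3.0:                          # keep only "same sense" edges
        G.add_edge(i, j, weight=score)

senses = list(nx.connected_components(G))     # each component ~ one induced sense
print(senses)                                 # -> [{0}, {1, 2}]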
To achieve high-fidelity haptic rendering of soft objects in highly mobile virtual environments, we propose DandelionTouch, a novel haptic display in which a swarm of drones delivers haptic actuators to the user's fingertips. Users of DandelionTouch can experience haptic feedback in a large space that is not limited by a device workspace and, importantly, do not suffer muscle fatigue during prolonged interaction with virtual objects. Hand tracking and a swarm control algorithm allow the swarm to be guided by hand motion while avoiding collisions inside the formation. In this study, several topologies of impedance connections between agents were investigated. A real-time trajectory-following experiment on a square path showed that drones connected in a star topology performed the trajectory with lower position error (RMSE reduced by 20.6% compared with other impedance topologies and by 40.9% compared with potential-field-based swarm control). In all formations with impedance behavior, the drones reached speeds 28% higher than a swarm controlled by the potential-field algorithm. In addition, the perception of several vibrotactile patterns was evaluated in a user study with 7 participants. The study showed that the proposed combination of time delay and frequency modulation allows users to simultaneously recognize surface properties and motion direction in VR (with an average recognition rate of 70% and a maximum of 93%). DandelionTouch suggests a new type of haptic feedback for VR systems in which no hand-held or wearable interface is required.
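The impedance coupling mentioned above can be pictured as a virtual spring-damper link between connected drones; the sketch below computes such a link force with illustrative gains and is not the controller implemented in DandelionTouch.

import numpy as np

def impedance_force(p_i, p_j, v_i, v_j, rest_length=1.0, k=2.0, d=0.5):
    """Force on drone i from its impedance link to drone j (virtual spring-damper)."""
    delta = p_j - p_i
    dist = np.linalg.norm(delta) + 1e-9
    direction = delta / dist
    spring = k * (dist - rest_length) * direction   # pull toward the desired separation
    damper = d * (v_j - v_i)                        # damp relative motion
    return spring + damper

f = impedance_force(np.array([0.0, 0.0]), np.array([2.0, 0.0]),
                    np.array([0.0, 0.0]), np.array([0.0, 0.0]))
print(f)   # -> [2. 0.]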
Coherent microscopy techniques provide an unparalleled multi-scale view of materials across scientific and technological fields, from structural materials to quantum devices, from integrated circuits to biological cells. Driven by the construction of brighter sources and high-rate detectors, coherent X-ray microscopy methods such as ptychography are poised to revolutionize nanoscale materials characterization. However, the associated significant increase in data and compute demands means that conventional approaches are no longer sufficient to recover sample images in real time from high-speed coherent imaging experiments. Here, we demonstrate a workflow that leverages artificial intelligence at the edge and high-performance computing to enable real-time inversion of X-ray ptychography data streamed directly from a detector. The proposed AI-enabled workflow eliminates the sampling constraints imposed by traditional ptychography, allowing low-dose imaging using orders of magnitude less data than required by traditional methods.
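The streaming idea can be pictured with the schematic loop below: diffraction frames arrive from the detector and a trained network inverts them as they arrive, instead of waiting for a complete, densely sampled scan. The frame source and the inversion step are placeholders, not the deployed edge/HPC pipeline.

import numpy as np

def detector_stream(n_frames, shape=(64, 64)):
    for _ in range(n_frames):
        yield np.random.rand(*shape)     # stand-in for a measured diffraction pattern

def invert(frame):
    # Placeholder for the trained network mapping a diffraction pattern
    # to a real-space patch (amplitude/phase) of the sample.
    return np.fft.ifft2(frame)

patches = [invert(frame) for frame in detector_stream(n_frames=8)]
print(len(patches), patches[0].shape)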